

How To Remove Bias From AI Models - AI Summary

#artificialintelligence

"Unfortunately, there's no way to quantify the size of this problem," said Brandon Purcell, a Forrester vice president, principal analyst, and co-author of the report, adding "… it's true that we are far from artificial general intelligence, but AI is being used to make critical decisions about people at scale today--from credit decisioning, to medical diagnoses, to criminal sentencing. These could include business leaders, lawyers, security and risk specialists, as well as activists, nonprofits, members of the community and consumers. Accounting for intersectionality or how different elements of a person's identity combine to compound the impacts of bias or privilege. "The key is in adopting best practices across the AI lifecycle from the very conception of the use case, through data understanding, modeling, evaluation, and into deployment and monitoring," Purcell said. "Unfortunately, there's no way to quantify the size of this problem," said Brandon Purcell, a Forrester vice president, principal analyst, and co-author of the report, adding "… it's true that we are far from artificial general intelligence, but AI is being used to make critical decisions about people at scale today--from credit decisioning, to medical diagnoses, to criminal sentencing.


AI and bias: Machines are less biased than people - Verdict

#artificialintelligence

We hear a lot these days about the potential dangers of "AI bias." If a machine learning system is based upon a data set that is somehow biased by age, gender, race, ethnicity, income, education, geography, or some other factor, the system's outputs will tend to reflect those biases. As the inner workings of ML systems are often impossible for an outsider to fully understand, any such biases can appear to be hidden, making them seem especially sinister. But before getting too alarmed, ask yourself this. Over the long run, which decision-making model is likely to be more objective: human or machine reasoning?
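The claim that a skewed data set yields skewed outputs can be made concrete with a simple fairness measurement. The sketch below uses entirely fabricated toy data (the groups, outcomes, and `selection_rate` helper are illustrative, not from the article) to show how a "demographic parity" gap between two groups can be computed from a model's decisions:

```python
# Toy illustration: decisions learned from skewed historical data can
# reproduce that skew. All data below is fabricated for demonstration.

# Hypothetical approval decisions: group A approved 4 of 5 times,
# group B approved only 2 of 5 times.
decisions = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def selection_rate(records, group):
    """Fraction of positive outcomes for the given group."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(decisions, "A")  # 0.8
rate_b = selection_rate(decisions, "B")  # 0.4

# Demographic parity difference: 0.0 would mean equal selection rates.
parity_gap = abs(rate_a - rate_b)
print(f"Group A: {rate_a:.2f}, Group B: {rate_b:.2f}, gap: {parity_gap:.2f}")
```

Auditing a system with metrics like this is one way the "hidden" biases the article describes can be surfaced, even when the model's inner workings are opaque.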


Artificial intelligence will eliminate more bias than it creates

#artificialintelligence

Lately, there has been a lot of discussion about potential biases in artificial intelligence systems. But what do we actually mean by bias? And, on balance, will AI create more or less of it? The first question doesn't get the attention it deserves. While the origins of the word bias are obscure, its root meaning revolves around some sort of slant or angle.


Will Artificial Intelligence be Too Intelligent for us? - CLNews

#artificialintelligence

Artificial intelligence has the potential to save lives and give us better, faster, more personalised information. Surely we should welcome its advance with open arms? Or are there more sinister overtones to the march of AI? Should we be worried? If you have bought anything off Amazon recently, you will have experienced an artificial intelligence algorithm at work: this is what you looked at recently, so you are obviously interested in buying it; this is what people with a similar profile to you have bought.


Can we trust AI if we don't know how it works?

#artificialintelligence

We're at an unprecedented point in human history where artificially intelligent machines could soon be making decisions that affect many aspects of our lives. But what if we don't know how they reached their decisions? Imagine being refused health insurance - but when you ask why, the company simply blames its risk assessment algorithm. Or if you apply for a mortgage and are refused, but the bank can't tell you exactly why. Or more seriously, if the police start arresting people on suspicion of planning a crime solely based on a predictive model informed by a data-crunching supercomputer. These are some of the scenarios the tech industry is worrying about as artificial intelligence (AI) marches inexorably onwards, infiltrating more and more aspects of our lives.

